Learning Maximum Likelihood Semi-Naive Bayesian Network Classifier

Authors

  • Kaizhu Huang
  • Irwin King
Abstract

In this paper, we propose a technique to construct a sub-optimal semi-naive Bayesian network given a bound on the maximum number of variables that can be combined into a node. We show theoretically that our approach has a lower computational cost than the traditional semi-naive Bayesian network, while the resulting sub-optimal structure is still obtained according to the maximum likelihood criterion. We conduct a series of experiments to evaluate our approach; the results are encouraging and promising.

Keywords — Bayesian network, semi-naive, bound, integer programming.
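To make the semi-naive idea concrete, here is a minimal sketch of classification with merged attribute nodes: attributes in the same group form one joint (Cartesian-product) variable, and standard naive-Bayes estimation with Laplace smoothing is applied to the groups. The grouping is supplied by hand in this sketch (respecting a bound B = 2 on group size); the paper's contribution is *searching* for such a grouping under the bound via integer programming, which this example does not implement.

```python
import math
from collections import Counter, defaultdict

class SemiNaiveBayes:
    """Naive Bayes over merged attribute groups (semi-naive Bayes).

    `groups` is a fixed partition of attribute indices; each group of
    size <= B is treated as a single joint node. This is an illustrative
    sketch, not the paper's integer-programming structure search.
    """

    def __init__(self, groups, alpha=1.0):
        self.groups = groups      # e.g. [(0, 1), (2,)] with bound B = 2
        self.alpha = alpha        # Laplace smoothing parameter

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.class_count = Counter(y)
        n = len(y)
        self.prior = {c: self.class_count[c] / n for c in self.classes}
        # counts[g][c][v]: examples of class c whose group-g joint value is v
        self.counts = [defaultdict(Counter) for _ in self.groups]
        self.values = [set() for _ in self.groups]
        for xi, c in zip(X, y):
            for g, idxs in enumerate(self.groups):
                v = tuple(xi[j] for j in idxs)
                self.counts[g][c][v] += 1
                self.values[g].add(v)
        return self

    def predict(self, x):
        best, best_lp = None, -math.inf
        for c in self.classes:
            lp = math.log(self.prior[c])
            for g, idxs in enumerate(self.groups):
                v = tuple(x[j] for j in idxs)
                num = self.counts[g][c][v] + self.alpha
                den = self.class_count[c] + self.alpha * len(self.values[g])
                lp += math.log(num / den)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Toy data: three binary attributes; attributes 0 and 1 are merged.
X = [(0, 0, 1), (0, 1, 1), (1, 0, 0), (1, 1, 0)]
y = ['a', 'a', 'b', 'b']
clf = SemiNaiveBayes(groups=[(0, 1), (2,)]).fit(X, y)
```

With a bound B = 1 (every group a singleton) this reduces to ordinary naive Bayes; larger groups trade independence assumptions for more parameters per node, which is why the paper bounds the group size.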


Similar articles

On Supervised Learning of Bayesian Network Parameters

Bayesian network models are widely used for supervised prediction tasks such as classification. Usually the parameters of such models are determined using ‘unsupervised’ methods such as likelihood maximization, as it has not been clear how to find the parameters maximizing the supervised likelihood or posterior globally. In this paper we show how this supervised learning problem can be solved e...


When Discriminative Learning of Bayesian Network Parameters Is Easy

Bayesian network models are widely used for discriminative prediction tasks such as classification. Usually their parameters are determined using ‘unsupervised’ methods such as maximization of the joint likelihood. The reason is often that it is unclear how to find the parameters maximizing the conditional (supervised) likelihood. We show how the discriminative learning problem can be solved ef...


Supervised Classification with Gaussian Networks. Filter and Wrapper Approaches

Bayesian network based classifiers are only able to handle discrete variables. They assume that variables are sampled from a multinomial distribution, yet most real-world domains involve continuous variables. A common practice to deal with continuous variables is to discretize them, with a subsequent loss of information. The continuous classifiers presented in this paper are supported by the Ga...


Supervised Learning of Bayesian Network Parameters Made Easy

Bayesian network models are widely used for supervised prediction tasks such as classification. Usually the parameters of such models are determined using ‘unsupervised’ methods such as maximization of the joint likelihood. In many cases, the reason is that it is not clear how to find the parameters maximizing the supervised (conditional) likelihood. We show how the supervised learning problem ...


Calculating the Normalized Maximum Likelihood Distribution for Bayesian Forests

When learning Bayesian network structures from sample data, an important issue is how to evaluate the goodness of alternative network structures. Perhaps the most commonly used model (class) selection criterion is the marginal likelihood, which is obtained by integrating over a prior distribution for the model parameters. However, the problem of determining a reasonable prior for the parameters...



Journal title:

Volume   Issue

Pages   –

Publication date: 2002